Are people allowed to treat their robots however they like? Can a person be “violent” towards their robot? Should a robot be able to join a labour union? Can robots be subjected to enslavement? The use of robots cuts across almost every segment of society. Many robots are created to interact closely with humans and to do tasks that are sometimes deeply personal to us. Robots have become a new class of social actors, giving rise to the concept of “machine behavior.” With many people anthropomorphizing robots in different ways, there is no settled stance on the kinds of rights we should accord them.
To answer these questions, it is important to note that the law on how robots may be treated mostly rests on their status as “products” or property. Their legal protection is limited to safeguards against theft, vandalism, hacking, and the like.
Robots are not human beings; therefore, they cannot be entitled to human rights. They are products of human manufacture and have so far shown no credible evidence of having emotions or feelings. An algorithm or robot cannot feel pain or be enslaved because it has no biological body that can be subjected to torture, nor a life (as a being) that can be susceptible to suffering or trauma. The idea that AI or robots can be “enslaved” is therefore wide of the mark. Most robots are created primarily to carry out laborious functions and to serve humans. Their value is largely attached to the service and function they provide, so using them in an “exploitative” manner (such as compelling or overworking them) is not only legal but consistent with the very purpose for which they exist. Humans, on the other hand, irrespective of who we are or whether we are of any use to society, derive our value from our intrinsic self-worth, not from the value we provide. Therefore, by virtue of the inalienable quality of our humanity, we remain susceptible to forced labour, indignity and servitude. This generally cannot apply to robots as we know them today.
However, some people have reported distress and trauma after witnessing the inhumane treatment of a robot. In Japan, a robot has been the subject of worship. The Kingdom of Saudi Arabia granted citizenship to a female-looking robot called Sophia. Robots (and other non-human objects) can be imbued with humanistic value. The fact that a thing is not a human being does not mean it cannot be accorded reverence or recognition of rights. This may not necessarily be about the utility of the thing itself, but about the sentimental value or attachment that humans give it. It could also be about the symbolism the object carries and, in particular, how that representation bears on the rights or identity of the humans who value it.
For example, some countries personify their national flag and protect it accordingly. In some states, the law accords protection and rights to religious objects: in Pakistan, Somalia and Afghanistan, for example, desecrating a copy of the Quran can amount to a crime punishable by life imprisonment or even death. Similar protection could extend to AI and robots, especially where their exploitation is expressly defined and proscribed. If a sovereign, self-determining people chooses to accord a robot rights (as it would a human being), then for them (and those bound by their law), such an object, though insentient, may enjoy protections similar to those of humans.
What does the way we treat robots say about us?
The way we treat robots reflects our human psychology. Does a person who is repeatedly “violent” towards their robot pose a threat to other humans? Will hurling insults at a robot interfere with (or even pre-empt) our relationships with people? Expressing “robot-love” or “robot-abuse” could be an imitation of the psychological mechanisms behind our social inclusion. It could also reveal our vulnerability to capitalist manipulation. How we treat robots may be tied to how humanity engages with the vision of the singularity, and it uncovers the lens through which we see ourselves. Can engaging more robots as “servants” replace human-to-human exploitation? Will it improve social behaviour? Whether sentient or not, robots should not be accorded the same status as humans, nor should they be “exploited” like slaves. They should be regarded and treated as what they are: robots.